

Search for: All records

Creators/Authors contains: "Ganguli, Surya"


  1. Given the fundamental importance of combinatorial optimization across many diverse domains, there has been widespread interest in the development of unconventional physical computing architectures that can deliver better solutions with lower resource costs. However, a theoretical understanding of their performance remains elusive. We develop such understanding for the case of the coherent Ising machine (CIM), a network of optical parametric oscillators that can be applied to any quadratic unconstrained binary optimization problem. We focus on how the CIM finds low-energy solutions of the Sherrington-Kirkpatrick spin glass. As the laser gain of this system is annealed, the CIM interpolates from gradient descent on coupled soft spins to descent on coupled binary spins. By combining the Kac-Rice formula, the replica method, and supersymmetry breaking, we develop a detailed understanding of the evolving geometry of the high-dimensional energy landscape of the CIM as the laser gain increases, finding several phase transitions in the landscape, from flat to rough to rigid. Additionally, we develop a novel cavity method that provides a geometric interpretation of supersymmetry breaking in terms of the reactivity of a rough landscape to specific external perturbations. Our energy landscape theory successfully matches numerical experiments, provides geometric insights into the principles of CIM operation, and yields optimal annealing schedules. Published by the American Physical Society, 2024.
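To make the annealed-gain dynamics concrete, here is a minimal sketch (our own illustration, not code from the paper) of simplified mean-field CIM amplitude dynamics on a Sherrington-Kirkpatrick instance: soft amplitudes evolve under a cubic saturation plus Ising coupling while the pump gain is ramped, then harden into binary spins. All function names, parameter values, and the linear gain schedule are assumptions for illustration; measurement noise and amplitude-heterogeneity corrections are omitted.

```python
import numpy as np

def sk_couplings(n, rng):
    """Sherrington-Kirkpatrick couplings: symmetric Gaussian, zero diagonal."""
    J = rng.normal(0.0, 1.0 / np.sqrt(n), size=(n, n))
    J = (J + J.T) / 2.0
    np.fill_diagonal(J, 0.0)
    return J

def run_cim(J, gain_schedule, coupling=0.1, dt=0.01, rng=None):
    """Euler-integrate simplified mean-field CIM amplitude dynamics:
        dx_i/dt = (p - 1 - x_i^2) x_i + c * sum_j J_ij x_j
    As the pump gain p is annealed upward through threshold, the soft
    amplitudes x_i harden toward binary spins sign(x_i)."""
    rng = rng or np.random.default_rng(0)
    x = 1e-3 * rng.normal(size=J.shape[0])   # small random initial amplitudes
    for p in gain_schedule:
        x += dt * ((p - 1.0 - x**2) * x + coupling * (J @ x))
    return np.sign(x)

def ising_energy(J, s):
    """SK energy of a binary spin configuration."""
    return -0.5 * s @ J @ s

rng = np.random.default_rng(1)
J = sk_couplings(200, rng)
s = run_cim(J, gain_schedule=np.linspace(0.0, 2.0, 20000), rng=rng)
print("energy per spin:", ising_energy(J, s) / len(s))
```

In this toy, the shape and speed of `gain_schedule` play the role of the annealing schedule whose optimization the paper's landscape theory addresses.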
  2. We combine stochastic thermodynamics, large deviation theory, and information theory to derive fundamental limits on the accuracy with which single cell receptors can estimate external concentrations. As expected, if the estimation is performed by an ideal observer of the entire trajectory of receptor states, then no energy-consuming nonequilibrium receptor that can be divided into bound and unbound states can outperform an equilibrium two-state receptor. However, when the estimation is performed by a simple observer that measures the fraction of time the receptor is bound, we derive a fundamental limit on the accuracy of general nonequilibrium receptors as a function of energy consumption. We further derive and exploit explicit formulas to numerically estimate a Pareto-optimal tradeoff between accuracy and energy. We find this tradeoff can be achieved by nonuniform ring receptors with a number of states that necessarily increases with energy. Our results yield a thermodynamic uncertainty relation for the time a physical system spends in a pool of states and generalize the classic Berg-Purcell limit [H. C. Berg and E. M. Purcell, Biophys. J. 20, 193 (1977)] on cellular sensing along multiple dimensions.
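The "simple observer" setup can be illustrated with a hedged sketch (our construction, not the paper's code): an equilibrium two-state receptor is simulated as a telegraph process, and the measured bound-time fraction is inverted through the binding curve p = c / (c + K_d) to estimate the concentration. Rate constants and the Gillespie-style simulation below are illustrative assumptions.

```python
import numpy as np

def simulate_occupancy(c, k_on, k_off, T, rng):
    """Fraction of time a two-state receptor is bound during [0, T].

    Bound <-> unbound telegraph process with binding rate k_on * c
    and unbinding rate k_off, simulated event by event."""
    t, bound, t_bound = 0.0, False, 0.0
    while t < T:
        rate = k_off if bound else k_on * c
        dwell = min(rng.exponential(1.0 / rate), T - t)
        if bound:
            t_bound += dwell
        t, bound = t + dwell, not bound
    return t_bound / T

def estimate_c(occupancy, k_on, k_off):
    """Invert the equilibrium binding curve p = c / (c + K_d)."""
    Kd = k_off / k_on
    p = np.clip(occupancy, 1e-6, 1 - 1e-6)
    return Kd * p / (1.0 - p)

rng = np.random.default_rng(0)
c_true, k_on, k_off, T = 1.0, 1.0, 1.0, 200.0
estimates = [estimate_c(simulate_occupancy(c_true, k_on, k_off, T, rng),
                        k_on, k_off) for _ in range(500)]
print(f"relative error at T={T}: {np.std(estimates) / c_true:.3f}")
```

Repeating the experiment at several integration times T shows the relative error shrinking roughly as 1/sqrt(T), in the spirit of the Berg-Purcell limit that the paper generalizes.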
  3. Understanding the neural basis of the remarkable human cognitive capacity to learn novel concepts from just one or a few sensory experiences constitutes a fundamental problem. We propose a simple, biologically plausible, mathematically tractable, and computationally powerful neural mechanism for few-shot learning of naturalistic concepts. We posit that the concepts that can be learned from few examples are defined by tightly circumscribed manifolds in the neural firing-rate space of higher-order sensory areas. We further posit that a single plastic downstream readout neuron learns to discriminate new concepts based on few examples using a simple plasticity rule. We demonstrate the computational power of our proposal by showing that it can achieve high few-shot learning accuracy on natural visual concepts using both macaque inferotemporal cortex representations and deep neural network (DNN) models of these representations and can even learn novel visual concepts specified only through linguistic descriptors. Moreover, we develop a mathematical theory of few-shot learning that links neurophysiology to predictions about behavioral outcomes by delineating several fundamental and measurable geometric properties of neural representations that can accurately predict the few-shot learning performance of naturalistic concepts across all our numerical simulations. This theory reveals, for instance, that high-dimensional manifolds enhance the ability to learn new concepts from few examples. Intriguingly, we observe striking mismatches between the geometry of manifolds in the primate visual pathway and in trained DNNs. We discuss testable predictions of our theory for psychophysics and neurophysiological experiments. 
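As a toy version of the proposal (not the paper's code), here is a sketch of a single readout trained by a simple prototype-style plasticity rule on two synthetic concept "manifolds." The abstract specifies only "a single plastic downstream readout neuron" with "a simple plasticity rule"; the difference-of-means rule and isotropic Gaussian clouds below are illustrative assumptions.

```python
import numpy as np

def few_shot_accuracy(d=200, n_train=5, n_test=500, signal=2.0, seed=0):
    """Few-shot discrimination of two toy concept manifolds.

    Each concept is an isotropic Gaussian cloud in a d-dimensional
    firing-rate space. A single readout is set by a prototype rule:
    w = (mean of concept-a examples) - (mean of concept-b examples)."""
    rng = np.random.default_rng(seed)
    mu = rng.normal(size=d)
    mu *= signal / np.linalg.norm(mu)          # concept centers at +/- mu
    train_a = +mu + rng.normal(size=(n_train, d))
    train_b = -mu + rng.normal(size=(n_train, d))
    w = train_a.mean(axis=0) - train_b.mean(axis=0)   # prototype readout
    b = -0.5 * w @ (train_a.mean(axis=0) + train_b.mean(axis=0))
    test_a = +mu + rng.normal(size=(n_test, d))
    test_b = -mu + rng.normal(size=(n_test, d))
    correct = np.sum(test_a @ w + b > 0) + np.sum(test_b @ w + b < 0)
    return correct / (2 * n_test)

# accuracy improves with the number of training examples per concept
for k in (1, 2, 5, 20):
    print(k, few_shot_accuracy(n_train=k))
```

In this toy, the geometric quantities the theory emphasizes (manifold radius, dimension, and center separation) map onto the cloud width, the ambient dimension d, and the `signal` parameter.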
  4. Blohm, Gunnar (Ed.)
    Neural circuits consist of many noisy, slow components, with individual neurons subject to ion channel noise, axonal propagation delays, and unreliable and slow synaptic transmission. This raises a fundamental question: how can reliable computation emerge from such unreliable components? A classic strategy is to simply average over a population of N weakly coupled neurons to achieve errors that scale as 1/√N. But more interestingly, recent work has introduced networks of leaky integrate-and-fire (LIF) neurons that achieve coding errors that scale superclassically as 1/N by combining the principles of predictive coding and fast and tight inhibitory-excitatory balance. However, spike transmission delays preclude such fast inhibition, and computational studies have observed that such delays can cause pathological synchronization that in turn destroys superclassical coding performance. Intriguingly, it has also been observed in simulations that noise can actually improve coding performance, and that there exists some optimal level of noise that minimizes coding error. However, we lack a quantitative theory that describes this fascinating interplay between delays, noise, and neural coding performance in spiking networks. In this work, we elucidate the mechanisms underpinning this beneficial role of noise by deriving analytical expressions for coding error as a function of spike propagation delay and noise levels in predictive-coding tight-balance networks of LIF neurons. Furthermore, we compute the minimal coding error and the associated optimal noise level, finding that they grow as power laws with the delay. Our analysis reveals quantitatively how optimal levels of noise can rescue neural coding performance in spiking neural networks with delays by preventing the buildup of pathological synchrony without overwhelming the overall spiking dynamics. This analysis can serve as a foundation for the further study of precise computation in the presence of noise and delays in efficient spiking neural circuits.
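The delay-noise interplay can be explored with a rough simulation sketch (our own construction under strong simplifying assumptions, not the paper's model or derivation): a one-dimensional predictive-coding network of identical LIF neurons in which lateral inhibition arrives after a transmission delay while self-reset is instantaneous, with Gaussian voltage noise added. All parameter values are invented, and whether this toy reproduces the paper's power laws is not checked here; it only illustrates the mechanism by which noise can break delay-induced synchrony.

```python
import numpy as np

def coding_error(noise_std, delay_steps, n=50, steps=20000, dt=1e-4, seed=0):
    """RMS error of a 1-D predictive-coding LIF network with delays.

    n identical neurons track a constant signal x. Each spike's
    correction of the shared estimate reaches other neurons only after
    `delay_steps` time steps, while each neuron resets itself
    instantly. Gaussian voltage noise of size `noise_std` can break up
    the synchrony that the delay would otherwise induce."""
    rng = np.random.default_rng(seed)
    gamma, lam, x = 0.1, 10.0, 1.0     # decoding weight, leak rate, signal
    thresh = gamma**2 / 2.0            # efficient-coding spike threshold
    v = rng.uniform(0.0, thresh, n)    # membrane voltages
    x_hat, sq_err = 0.0, 0.0
    buf = np.zeros((delay_steps + 1, n))  # ring buffer of past spikes
    for t in range(steps):
        arriving = buf[t % (delay_steps + 1)]      # spikes from t - delay
        lateral = arriving.sum() - arriving        # exclude own delayed spike
        v += (dt * (-lam * v + gamma * lam * x)
              - gamma**2 * lateral
              + np.sqrt(dt) * noise_std * rng.standard_normal(n))
        spikes = (v > thresh).astype(float)
        v -= gamma**2 * spikes                     # instantaneous self-reset
        buf[t % (delay_steps + 1)] = spikes
        x_hat += -dt * lam * x_hat + gamma * spikes.sum()
        sq_err += (x - x_hat) ** 2
    return np.sqrt(sq_err / steps)

# sweeping the noise level at fixed delay probes for an error
# minimum at a nonzero noise level
for sigma in (0.0, 1e-3, 1e-2, 1e-1):
    print(sigma, coding_error(noise_std=sigma, delay_steps=20))
```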